AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

This would be bringing us back to the microstutter argument all over again. It's there. Some people see it, some don't. The fact is on the graph I linked, the 7970 is obviously more "messy" than either of the other two single GPU cards that it's compared against.

You could use the microstutter argument if the spikes had a repeatable pattern. That way we would get a good average framerate but a crappy sense of fluidity.
 
Exactly. In this case a single-frame spike means that one frame isn't synchronized, so the frame exists, but it doesn't improve subjective fluidity. One spike per second means one lost frame per second (still in terms of subjective fluidity), so even though the game runs at 36 fps, it's subjectively only as fluid as 35 fps gameplay. I doubt a ~1 fps difference can be noticed during real gameplay.

It's not exactly the same, imo, as it means that you can see microstutter because at some points the image stays on screen longer than it should for a smooth experience. So it's not exactly equivalent to 35 fps.
 
It's not exactly the same, imo, as it means that you can see microstutter because at some points the image stays on screen longer than it should for a smooth experience.

No, it means some can see microstutter, and even for those who can, a single spike isn't necessarily noticeable yet, or at least not enough to affect anything in real life.
 
No, it means some can see microstutter, and even for those who can, a single spike isn't necessarily noticeable yet, or at least not enough to affect anything in real life.

Yes, but that was not what I was referring to. I was merely pointing out that those two cases are not equivalent. Not referring to whether you can see it or not.
 
This would be bringing us back to the microstutter argument all over again. It's there. Some people see it, some don't. The fact is on the graph I linked, the 7970 is obviously more "messy" than either of the other two single GPU cards that it's compared against.

And if we take some other game, like Skyrim, the tables are turned, and in a LOT more dramatic fashion too:
[Image: skyrim-nv.gif — Skyrim frame-time plot]


The GTX 480 stutters like there's no tomorrow for most of the bench, and the 580 a lot more than the 7970 too, while the 7970 only has a short rough period, and even then it's less stuttery.
 
For a while, a particularity of the Radeons that really bothered me was their minimum framerates in games (something that is far too rarely considered in reviews). It seems that Scott's review has finally put those concerns to rest.

Not sure if the VLIW architecture itself or the drivers were to blame, but even with the less mature GCN drivers, AMD has made huge strides there.

Interesting to see the frame times in milliseconds in that review you linked. Something I've not seen much of lately. Needless to say, the 7970 is doing much better than previous generations. It would have been nice to see both their results and a performance overlay for BF3.
 
This would be bringing us back to the microstutter argument all over again.
Hardly. AFR microstuttering means a continuously irregular distribution of frame-to-frame times, e.g. 10 ms - 90 ms - 10 ms - 90 ms etc. It results in a subjectively halved framerate, because every second frame doesn't improve subjective fluidity. E.g. 50 fps with significant microstuttering can result in fluidity similar to 25 fps - worst case. (Real framerate minus out-of-sync frames per second = subjective framerate.) Nothing more, nothing less.

The Crysis 2 benchmark doesn't show that every second frame is out of sync. It shows that one in tens/hundreds of frames is out of sync (~effectively lost).

The majority of the people who "see" microstuttering are in fact describing different problems, like jumps, (macro-)stuttering, a borked distribution of frames (typical of the early SLI/CF days), etc. But true microstuttering can't be seen as such - it is perceived as a subjectively lower framerate.
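To put numbers on that, here's a minimal sketch of the "real framerate minus out-of-sync frames per second = subjective framerate" arithmetic. The 50% deviation threshold used to call a frame out of sync is an assumption made up for illustration, not something from the benchmarks:

```python
# Sketch of the "subjective framerate" idea above: real framerate minus
# out-of-sync frames per second. The 50% deviation threshold is an
# arbitrary assumption for illustration.

def subjective_fps(frametimes_ms, threshold=0.5):
    """frametimes_ms: frame-to-frame times for roughly one second of gameplay."""
    real_fps = len(frametimes_ms)
    mean = sum(frametimes_ms) / real_fps
    # A frame delivered way off the regular cadence adds nothing to
    # perceived fluidity, so it is discounted.
    out_of_sync = sum(1 for t in frametimes_ms if abs(t - mean) > threshold * mean)
    return real_fps - out_of_sync

# 36 frames, one of them a 90 ms spike: subjectively ~35 fps, as argued above.
smooth = [1000 / 36] * 35
spiked = smooth[:17] + [90.0] + smooth[17:]
print(subjective_fps(spiked))  # -> 35
```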

You said that the ~1 fps (effective) loss of the HD 7970 can be seen by some people, but on the other hand you refuse to admit that the 20-30% fps advantage of the HD 7970 (over the GTX 580) can be of any use...
 
Wow. I really like that method of benchmark graphing :oops:
Tells you so much more about the performance than the normal fps bars :yes:
 
Wow. I really like that method of benchmark graphing :oops:
Tells you so much more about the performance than the normal fps bars :yes:

Though I'd prefer it if it was translated to "realtime fps", so to say; it's a really, really simple job in Excel, and it's far more user-friendly to read than frametimes.
 
Frames per second doesn't help much. Stuttering on timescales shorter than a second is annoying; that's the whole point of making the graphs in milliseconds, not [H]-style.

The reason for the high stuttering on NV cards in those Skyrim graphs is maybe that the NV drivers need more CPU power.
 
Hey all, haven't been here in a while, but Tahiti has rekindled my passion for 3D graphics and hardware. I'm planning on doing a report on Tahiti and modern games with the same kind of insight as my bandwidth analysis from a few years ago.

For now though, I thought I'd make a quick remark on the TechReport review:
This would be bringing us back to the microstutter argument all over again. It's there. Some people see it, some don't. The fact is on the graph I linked, the 7970 is obviously more "messy" than either of the other two single GPU cards that it's compared against.
You have to understand that what TechReport records is not necessarily what you see. They are using FRAPS to log individual frame times, and this works via a D3D software hook that monitors the times at which IDirect3DDevice9::Present() is called. It doesn't monitor changes on the screen like Digital Foundry does in their Eurogamer faceoffs. Games usually take the past few frame steps to predict the next one, so stuttered Present() calls would still have objects move in steps that match unstuttered calls.

If a driver is to minimize input latency, it wants to get information from the game for the next frame as fast as possible (within reason, as it only queues a frame or two). This could result in stuttered calls from the view of FRAPS, but the driver can put the frames on the screen smoothly with delays at its discretion. If, OTOH, the driver wanted to ace TechReport's test, it would have a heuristic to insert variable artificial delays somewhere so that calls to Present() were more even, and then more delays at the end of the pipeline to make sure the frames went to the display smoothly.
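As a rough illustration of that point, here's a toy simulation (all numbers invented, no real driver logic) of how jittery Present() calls - what FRAPS records - can still reach the display evenly if the driver buffers a frame and paces it out:

```python
from itertools import accumulate

# Toy model (made-up numbers): the game calls Present() at jittery times,
# which is what FRAPS logs, but a driver holding a short queue can still
# pace the frames onto the display evenly.
present_deltas = [10, 90, 10, 90, 10, 90]           # ms between Present() calls
present_times = list(accumulate(present_deltas))

target = sum(present_deltas) / len(present_deltas)  # even pacing: 50 ms
display_times = []
for t in present_times:
    # A frame can't be shown before it was presented, nor before the
    # previous frame plus the pacing interval.
    prev = display_times[-1] if display_times else 0
    display_times.append(max(t, prev + target))

screen_deltas = [b - a for a, b in zip([0] + display_times, display_times)]
print("FRAPS sees:  ", present_deltas)              # 10, 90, 10, 90, ...
print("display sees:", screen_deltas)               # 50, 50, 50, 50, ...
```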

You should read their Battlefield 3 Performance article. They get differing results in the same game. With their Fear no Evil test, AMD shows lots of microstutter in FRAPS, but "the Radeons just don't feel much choppier than the GeForces overall." In the Rock and a hard place test, it was the other way around, and "with a GeForce, the latency spikes were very palpable, causing animations seemingly to speed up and slow down wantonly".

Even if the frame times reported by the driver matched those of the display, there's no easy way to summarize it all. Consider the Skyrim 99th-percentile numbers you referenced, where the 7970 was "worse". Now look at the actual plot Kaotik posted above. The 99th percentile is virtually the same for all DX11 cards because the slowest 1% of frames all fall within the first 600 frames, where it looks like the test is CPU/PCI-E bound. After that, however, it is the GTX 580 which shows vastly more stuttering according to FRAPS, despite you claiming the opposite due to the 99th-percentile time.

Looking closely, it appears that everything up to frame 2500 is CPU limited. If you cut those frames out of the test, the 7970 is 20% faster than the 580 instead of 10%.
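A quick numpy sketch of that percentile pitfall on synthetic data (all frame times here are invented; only the 600-frame CPU-bound opening mirrors the plot above):

```python
import numpy as np

# Synthetic illustration: if the slowest 1% of frames all sit in a
# CPU-bound stretch at the start, the 99th percentile looks the same for
# every card, no matter how the rest of the run behaves.
rng = np.random.default_rng(0)
cpu_bound = rng.normal(40.0, 2.0, 600)   # slow, CPU-limited opening section
card_a = np.concatenate([cpu_bound, rng.normal(15.0, 1.0, 2400)])  # smooth after
card_b = np.concatenate([cpu_bound, rng.normal(17.0, 6.0, 2400)])  # stutters after

for name, times in (("card A", card_a), ("card B", card_b)):
    print(name,
          "| 99th pct, all frames:", round(float(np.percentile(times, 99)), 1),
          "| after frame 600:", round(float(np.percentile(times[600:], 99)), 1))
```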
 
I would like to see TechReport throw into the mix either a PCIe 3.0 system with a faster CPU, or simply a Sandy Bridge @ 4 GHz+, to see how higher CPU speeds and/or the PCIe bus affect each card's performance.
It could provide sufficient evidence of how much graphics drivers depend on CPU power even at high resolutions usually taken as GPU bound. There is a popular belief that AMD is heavier on the CPU than nVidia in most games, so seeing frame-by-frame graphs for each vendor on machines of different speeds (mainly single-core performance matters here) is something I'm interested in :smile:.

Any volunteers?
If not, I can do it - just send me an HD 7970 and a GTX 580, and I'll throw an HD 6970 into the mix as well.
 
Frames per second doesn't help much. Stuttering on timescales shorter than a second is annoying; that's the whole point of making the graphs in milliseconds, not [H]-style.

The reason for the high stuttering on NV cards in those Skyrim graphs is maybe that the NV drivers need more CPU power.

No no, I mean "realtime FPS", as in, just frametimes converted to what the FPS would be at that particular frame (assuming the same frametime held for a whole second).

So it would look like this (above: frametimes converted to "fps equivalent"; below: the usual actual-fps style):
[Image: gtx275fc2.jpg]
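For reference, the conversion itself is just 1000 divided by each frame's time in milliseconds; a minimal sketch with made-up sample values:

```python
# The conversion described above: each frame's time in milliseconds mapped
# to the framerate you'd get if every frame took that long ("fps equivalent").
# Sample frametimes are made up.
frametimes_ms = [16.7, 16.9, 45.0, 16.5, 17.1]

fps_equivalent = [1000.0 / t for t in frametimes_ms]
print([round(f, 1) for f in fps_equivalent])  # [59.9, 59.2, 22.2, 60.6, 58.5]
```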
 
Wow. I really like that method of benchmark graphing :oops:
Tells you so much more about the performance than the normal fps bars :yes:
And many more things to argue about :).

the reason of high stuttering on nv cards in those skyrim graphs is maybe because of nv drivers need more cpu power..
Possible, though typically with completely CPU-bound settings NV cards tend to be faster, hinting that the NV driver needs less CPU power.
 
Tests done by Kyle over at [H] seem to show differently, at least in 3-way SLI and Tri-Fire.

Original test here, with the slower AMD cards being faster than Nvidia's.

And again with a faster CPU here, the roles are reversed.
 
Tests done by Kyle over at [H] seem to show differently, at least in 3-way SLI and Tri-Fire.
I'm not talking about SLI/CF (which brings its own set of problems, and sometimes some solutions may not scale at all with more cards).
Look at this for example: http://ht4u.net/reviews/2011/amd_radeon_hd_7900_southern_island_test/index28.php where the scores don't budge at all between the two lowest resolutions, and the GTX cards (both 580 and 560) have a higher hard fps limit than the HD 7970. Granted, it's just a single game; it could certainly be different in other apps. You might find some more scores in CPU rather than GPU reviews...
 
You said that the ~1 fps (effective) loss of the HD 7970 can be seen by some people, but on the other hand you refuse to admit that the 20-30% fps advantage of the HD 7970 (over the GTX 580) can be of any use...

Exactly. People can't accept that Nvidia can't always stay on top and sometimes they lose to the next-gen ATI GPU.
 
Yes, my delusions, based on facts and graphs from a third party at resolutions the majority of people will be using. How dare I be so bold!

Really, you think the "majority" of people with a GTX 580, 6970, or a future 7970 are going to be at 1920x1200 or below?

I'd argue that the majority of those users got those cards to make games playable either at 2560x1600 (or 2560x1440 now) or to make multi-monitor games playable.

GTX 580 users are ~1.09% of PC users, with the 6970 at ~0.60%, according to the latest Steam survey report. Heck, there are more users at 2560x1440 and higher resolutions than there are GTX 580 and 6970 users combined, which suggests that some users, like me, are happily gaming on a 2560x1600 monitor with a lower card like a 5870.

I'd say anyone spending money on a 580 or 6970 to play games at 1920x1200 or lower is wasting their money.

Regards,
SB
 
I'd say 1080p HDTVs are somewhat common these days as a gaming display; I know I'm using one. There are plenty of games where I can bring my OC'd GTX 570 to its knees even at that res. I won't be putting 500€ into a GPU purchase, though.
 
I'm not talking about SLI/CF (which brings its own set of problems, and sometimes some solutions may not scale at all with more cards).
Look at this for example: http://ht4u.net/reviews/2011/amd_radeon_hd_7900_southern_island_test/index28.php where the scores don't budge at all between the two lowest resolutions, and the GTX cards (both 580 and 560) have a higher hard fps limit than the HD 7970. Granted, it's just a single game; it could certainly be different in other apps. You might find some more scores in CPU rather than GPU reviews...
Although all the graphs in that review say F1 2010, the top does actually say F1 2011, and the results do not appear to line up with the F1 2010 test in their 53-card roundup, which may suggest it is indeed F1 2011. In which case: we did notice late in the game that F1 2011 wasn't performing as expected (i.e. in line with F1 2010), but it was discovered after the initial driver.
 