AMD: Southern Islands (7*** series) Speculation/Rumour Thread

psolord said:
So after you've watched these two very real life examples, can you honestly tell me, that a single card is better than two cards?
There is no need for me to watch the videos as I never made that claim. Additionally, the effect of the issue at hand is somewhat ameliorated by enabling V-sync, which is fine if you want to use it (but not everyone may want to).

If you are happy with the performance you are getting, that is great. Obviously not everyone may feel exactly the same way as you.

On a personal note, I do find it baffling that anyone would suggest AMD should use a "you're [strike]holding[/strike] playing it wrong" response to this matter.
 
Is the game's in-game AA option FXAA? It doesn't say; it just has an AA on/off toggle. I still see aliasing on some objects and it is somewhat inconsistent, so it may be FXAA.
 

Yes, you are right. You didn't specifically say it; sheepdog did, so I added his quote. But still, you posted the Crysis 3 link and I responded that it runs nicely (albeit with V-sync).

I can agree that not everyone prefers V-sync, though, so even if I can't understand why, I'll leave it at that as personal preference.

These reviews should do frametime analysis on V-synced systems, though. Before I bought the 7950 CrossFire setup, I was thinking "God help me with this system". It seems there was nothing for me to worry about.
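
To illustrate what I mean by frametime analysis, here's a minimal sketch in Python, assuming a FRAPS-style log with one frame time in milliseconds per line ("frametimes.csv" is just a placeholder name, and the 16.7 ms threshold is simply 60 Hz):

[code]
# Minimal frametime analysis sketch: average fps, 99th-percentile frame
# time, and total time spent on frames slower than 16.7 ms (i.e. < 60 fps).

def load_frametimes(path):
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def analyse(ft_ms):
    avg_fps = 1000.0 * len(ft_ms) / sum(ft_ms)
    ordered = sorted(ft_ms)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]
    beyond_60 = sum(t - 16.7 for t in ft_ms if t > 16.7)
    print(f"average: {avg_fps:.1f} fps")
    print(f"99th-percentile frame time: {p99:.1f} ms")
    print(f"time spent beyond 16.7 ms: {beyond_60:.0f} ms")

analyse(load_frametimes("frametimes.csv"))
[/code]

Run that on a V-synced capture and the percentile line would show the 60-30-15 steps immediately, even when the average looks fine.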
 
I agree with silent-guy. If runt frames aren't a problem, we should just drop every other frame on single GPUs too and not even bother rendering them. Then I could have multi-GPU performance for the price of one! :)

Seriously, though: how much would CrossFire performance suffer if this delay were introduced? I doubt it's that significant; otherwise Nvidia wouldn't have done it and cut their own legs off at the knees.
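
Just to reason about it, here's a toy model of what the metering presumably does: hold each frame back until roughly one average interval has passed since the last one. This is my guess at the basic idea, not NVIDIA's actual algorithm, and using the whole run's mean interval is a simplification (a real driver would need a running estimate, since it can't see future frames):

[code]
# Toy frame metering: frames finish rendering at "done" times (ms), but
# each is presented no sooner than one mean interval after the previous.

def meter(done_ms):
    target = (done_ms[-1] - done_ms[0]) / (len(done_ms) - 1)  # mean interval
    presented = [done_ms[0]]
    for t in done_ms[1:]:
        presented.append(max(t, presented[-1] + target))
    return presented

# Bursty AFR delivery: the second GPU's frame lands 1 ms behind the first's.
raw = [0.0, 1.0, 30.0, 31.0, 60.0, 61.0, 90.0, 91.0]
even = meter(raw)
print([round(b - a, 1) for a, b in zip(raw, raw[1:])])    # [1.0, 29.0, 1.0, ...]
print([round(b - a, 1) for a, b in zip(even, even[1:])])  # [13.0, 17.0, 13.0, ...]
[/code]

In that toy run the intervals even out at the cost of a few milliseconds of added display latency per frame, which is why I'd expect the throughput hit to be small.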
 
Pitcairn XT single slot sounds pretty nice for a rack cloud game server setup.
 
The Sky 900 (superficially) has the same specs as the FirePro S10000, but has a 300 W TDP instead of 375 W. Makes me wonder what they could do with a 375 W dual-Tahiti Malta.
 
We will need to wait a bit to get more info about the RapidFire technology anyway; the presentation was more like a short preview.
 
If the shoe was on the other foot and it was Nvidia doing this, people would be lighting their pitchforks.
Really? For what seems to be a correctable (if embarrassing) oversight?

(Heh, "lighting their pitchforks" made me think of burning pitchforks and then Doom 3 BFG Edition.)

Basically these errors double framerates without the card doing as much work, e.g. because one frame takes 15 ms to render while another takes 0.3 ms to render (which should be impossible)
This is not what's happening. The runt frames are fully rendered, they just arrive right after/before another frame and are basically wasted rendering thanks to bubbles in the rendering pipeline. A GPU still rendered the whole frame. We're talking Alternate Frame Rendering, not partial frame rendering.
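
To put that in concrete terms: a runt is just a frame whose time on screen (the gap until the next flip) is tiny. Here's a minimal sketch of flagging them from presentation timestamps; the 1 ms cutoff is my arbitrary pick for illustration, not any review site's official threshold:

[code]
# Flag "runt" frames: fully rendered frames that get replaced by the next
# frame almost immediately, so they are barely visible on screen.

RUNT_MS = 1.0  # arbitrary cutoff for this sketch

def flag_runts(present_ms):
    # Frame i is on screen from present_ms[i] until present_ms[i + 1].
    return [i for i in range(len(present_ms) - 1)
            if present_ms[i + 1] - present_ms[i] < RUNT_MS]

# Badly paced AFR: the two GPUs' frames arrive in near-simultaneous pairs.
times = [0.0, 0.3, 30.0, 30.3, 60.0, 60.3]
print(flag_runts(times))  # [0, 2, 4] -- the first frame of each pair is a runt
[/code]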

When I read AnandTech's AMD explanation, I didn't buy the excuse. If the cost of producing low-latency frame rates is not a real frame but a runt frame, which is a visual artifact, why bother doing it? It's basically a cheat to produce higher FPS than should be possible (i.e. greater than 100 percent scaling). Do you think a video card company would simply look at FPS and not look into these runt artifacts as final testing before drivers are released? I think that is naive.

AMD is simply able to get away with it because they are the underdog.
Again, a runt frame is a real frame, you just don't see it because of scheduling problems. And how do runt frames produce >100% scaling again? Each GPU is rendering a complete frame. Honestly, how often does CrossFire yield >100% scaling to make you think those occasions are anything other than benchmarking variance?
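
To make the scaling point concrete with invented numbers: take a pathological AFR stream where every frame is chased 0.3 ms later by its partner. A naive counter reports double the FPS, but discard the runts and you are right back at the single-GPU rate, so there's no magic >100% scaling anywhere:

[code]
# Invented numbers: ~1 second of frames. Naive fps counts every frame;
# "runts removed" drops frames that were on screen for under 1 ms.

def fps(present_ms):
    return 1000.0 * (len(present_ms) - 1) / (present_ms[-1] - present_ms[0])

single = [30.0 * i for i in range(34)]            # one GPU: a frame every 30 ms
afr = sorted(single + [t + 0.3 for t in single])  # partner frames 0.3 ms later

kept = [t for i, t in enumerate(afr[:-1])
        if afr[i + 1] - afr[i] >= 1.0] + [afr[-1]]

print(f"single GPU        : {fps(single):.1f} fps")  # ~33.3
print(f"AFR, naive count  : {fps(afr):.1f} fps")     # ~67.7 -- looks like 2x
print(f"AFR, runts removed: {fps(kept):.1f} fps")    # ~33.3 -- back to 1x
[/code]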

Given that it will take AMD several months to address this issue, you could argue they didn't have the manpower to tackle it (hence underdog status) and were hoping no one would notice. That'd be a legit complaint. They're no longer getting away with it, though. Now everyone's calling them on it, not just TR. If NV's behind these recent revelations for no other reason than they found it in their drivers first, good on them for exposing a legitimate problem.
 
I don't get the LOL. Is "AMD Gaming Evolved" supposed to be a codeword for "unfair advantage?"

Anyway, more Bioshock Infinite benchmarks, with framerate minimums and multiple driver versions.

Effectively, the reason is mostly that Nvidia released a driver in between, while AMD will release its second one this week (if possible). Nvidia gained 13 fps as a result (not bad at all, it's a big gain).

Also, Bioshock uses a classic UE3 engine with some DX11 parts, nothing really particularly new or taxing (outside maybe DDOF, which looks a bit buggy, and contact-hardening shadows). And Nvidia cards like UE3. (Note that, as with all UE3 games, it may be a bit faster on Nvidia cards, but it seems to be an FPS-dropping fest in some parts; all UE3 games have those stutters and FPS drops. Just read the Bioshock thread on Guru3D, lol.)

Note that the memory usage looks extremely high. I know memory usage is a bit dynamic, but I don't understand how the 2GB 680 keeps up with that at 2560x1600 (mostly high-res textures plus an extremely large environment; more than 2GB has been reported, as high as 3070+ MB). (I don't have the game, so I can't test it.)
 
On the second page, the 7970 GE wins, but I don't speak German so I don't know why the results differ.


There are two 680 lines and two 7970 GE lines due to driver updates: 314.21 vs 314.22 (which increased performance in Bioshock) for the 680, and 13.3 vs 13.2 beta 7 for the 7970 GE.

The 680 gains 15 fps by using 314.22 (still, the minimum FPS is 20 for the 680 vs 30 for the 7970 GE).
 
Something is definitely broken in the systems of those 2 reviews. Almost as if they didn't install the AMD beta drivers properly.
http://gamegpu.ru/action-/-fps-/-tps/bioshock-infinite-test-gpu.html

Nvidia GeForce/ION Driver Release 314.22
AMD Catalyst 13.3 beta 3

[Benchmark charts: Bioshock Infinite, DX11 with DDOF, at 1920 and 2560]
 
I guess the Bioshock benchmark is more demanding than the gameplay, because my 6970 is usually around 60 fps on Ultra at 1080p. Shadows seem to make it chug, I think, when there are lots of shadowed objects. And the in-game "frame rate lock" is a stuttery mess; use D3DOverrider instead.

I also tried it on a 4850 and it runs very smoothly at 1680x1050 in a D3D9 mode (?). It even looks good, so good I'm not sure what is missing from D3D11 unless I look it up. But it won't allow higher than Low texture resolution on D3D9 (or on the 512 MB card, maybe). The textures still looked good, though.

The game looks great, with nice textures and lighting. I love it when text in textures is readable, and this game has many very sharp textures like that. But, as is so frequent, the geometric complexity is PS360-level. Lots of roundness fails, lol.
 
I didn't notice that the in-game frame limiter is just V-sync; it says so in the description when you properly select it, not just hover over it. I'm getting min 23, avg 53 and max 118 fps at 1920x1200 on my overclocked 7950 with my crappy Q9550 at stock 2.83 GHz and 4GB of DDR2. I've used RadeonPro to cap the frame rate at 45 fps and it's very consistent with very little tearing now. I have to admit that when performance varies that much, V-sync is a very bad idea; I would have been dropping down to 15 fps far too often, which is horrible. The shifts between 60, 30 and 15 fps were making the game super jerky.

I tried capping to 60 fps in RadeonPro, but it couldn't maintain that enough of the time, so there was still noticeable tearing. At 45 I did notice it being not as fluid in general in static situations, but there is hardly any tearing and performance is now very consistent. I've now adapted to the 45 fps cap and am happy with how my very GPU-limited system is handling the game at Ultra settings with alternate post-processing (max settings).

I also turned off the game's FXAA and forced SMAA in RadeonPro. It looks a little better and performance seems the same to me.

EDIT: Just benchmarked with my user settings, which include an FOV increase to around 87 degrees and SMAA, with the frame limiter in place. My average is now 42 fps and my minimum 20. That's a drop in both (the minimum would be because of the FOV and possibly SMAA), but I'm so much happier with the consistent experience and very little noticeable tearing. I think this is something to take away from the recent discussion and controversy: a consistent experience is so much more important than a higher average or maximum fps. Using a frame limiter or V-sync with CrossFire would make it fine, from what I've seen. From now on I'll be running every game with a forced frame limit of 60 fps if it runs well or is multiplayer (there's no point in going higher than your monitor's refresh rate), and 45 fps or even 30 fps if it struggles somewhat and I want to limit tearing.
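
For anyone curious, a limiter like RadeonPro's is conceptually just doing this every frame (a minimal sketch; a real limiter hooks the game's Present call and busy-waits the last fraction of a millisecond rather than trusting sleep):

[code]
import time

CAP_FPS = 45
BUDGET = 1.0 / CAP_FPS  # ~22.2 ms per frame at a 45 fps cap

def render_and_present():
    pass  # stand-in for the game's actual frame work

deadline = time.perf_counter()
for _ in range(1000):  # frame loop
    render_and_present()
    deadline += BUDGET
    remaining = deadline - time.perf_counter()
    if remaining > 0:
        time.sleep(remaining)  # pace every frame to the same budget
    else:
        deadline = time.perf_counter()  # blew the budget; don't try to catch up
[/code]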
 