Trinity vs Ivy Bridge

Crossfire 7970s are getting blown away by SLI 580s in Skyrim.
Crossfire/SLI performance is completely irrelevant. AFR is stupidity and any performance of these configurations is totally irrelevant to the performance of proper single GPUs.

So AMD performs poorly in these two games especially...and also has broken crossfire drivers? And it just so happens both of these games are made by "nvidia friendly" devs.
But wait, wouldn't that run badly on Intel as well then? My point is you're observing that some games run poorly on AMD and attributing it to drivers, but you have no way of separating driver issues from architectural ones. The only way you can attribute something to drivers is retrospectively, after they have been improved drastically on the same hardware.

By all means, let us see the compelling case you can make for your reasoning; I'm all ears. I'm quite content to believe I'm wrong, but it sure looks like drivers to me, and I don't think anyone can hold that against me for believing so considering the weight of evidence.
I think you're missing my point. I'm not saying AMD doesn't do badly in certain games relative to NVIDIA and Intel - obviously they do and vice versa. But you can't attribute that to "drivers" without much more information.

It's far from just these games. When I look at the BF3 benchmarks and see ~30% in favour of the 680 (vs the 7970) on Anandtech, and only 5% in favour on Tom's... well, it makes me wonder what the real reasons behind it are.
Willing to bet that's just deferred MSAA on vs off. This has been a well-known architectural pain point for AMD relative to NVIDIA... in fact we knew about it even before BF3 was released. So again, not drivers.
 
Crossfire/SLI performance is completely irrelevant. AFR is stupidity and any performance of these configurations is totally irrelevant to the performance of proper single GPUs.

This is especially funny in a discussion on CPUs for subnotebooks and decent low-end desktops.
Though I've seen one tasty frame-time graph for the GTX 690 which showed very low stuttering - more important than average fps numbers, which are sometimes totally meaningless.
 
Though I've seen one tasty frame-time graph for the GTX 690 which showed very low stuttering - more important than average fps numbers, which are sometimes totally meaningless.
Yeah sometimes the stars can align and you can actually nearly double your frame *rate* but it's all smoke and mirrors: you don't actually decrease input latency at all. You might as well just be running it through your TV's motion interpolation ;)

And most of the time it's completely broken and gives you a worse experience than even just the single GPU would have anyways :p
 
Crossfire/SLI performance is completely irrelevant. AFR is stupidity and any performance of these configurations is totally irrelevant to the performance of proper single GPUs.

I'm constantly amazed at how tech minded people can miss the obvious. If AMD is slacking on crossfire drivers for certain games (believe me they are), what else are they slacking on?

I have 3 screens, eyefinity. Since 12.1 I can no longer overclock my graphics card - no it's not me because I have tried it on two different PC's. The overdrive panel opens once then won't open again unless I reinstall.

If I'm watching something on screen (flash), then switch to more screens, most of the time it hangs on me now. These are issues that weren't affecting me 6 months ago. I could go on and on but I'd rather not, however it's pretty apparent to me that AMD's software department has a lot to do and some things aren't getting done. How far down the list is game optimising now?
 
A 192-bit 45-55W Trinity would be preferable to a stuttering Dual-Graphics setup with an old 40nm GPU.

Kaveri (still DDR3) should offer such an option, if there is no on-package VRAM.
 
I'm constantly amazed at how tech minded people can miss the obvious. If AMD is slacking on crossfire drivers for certain games (believe me they are), what else are they slacking on?

I have 3 screens, eyefinity. Since 12.1 I can no longer overclock my graphics card - no it's not me because I have tried it on two different PC's. The overdrive panel opens once then won't open again unless I reinstall.

If I'm watching something on screen (flash), then switch to more screens, most of the time it hangs on me now. These are issues that weren't affecting me 6 months ago. I could go on and on but I'd rather not, however it's pretty apparent to me that AMD's software department has a lot to do and some things aren't getting done. How far down the list is game optimising now?
Completely separate issues, as all of these are separate resources/teams. Performance optimisation is still number 1, and you can already see lots of progress even in the short time SI has been out. Likewise, Crossfire has a dedicated team and that is their priority; the lack of a profile for Batman is not an indication that nothing is being done, as they have spent a disproportionate amount of time looking for a profile, but the app doesn't work and we have requested the developer make changes.
 
A 192-bit 45-55W Trinity would be preferable to a stuttering Dual-Graphics setup with an old 40nm GPU.

Kaveri (still DDR3) should offer such an option, if there is no on-package VRAM.

I have another opinion: make it switchable graphics. This already has the advantage of running one set of graphics drivers, versus two on the competition.

Clock the integrated GPU low; it's enough for the window manager, Google Earth and such nonsense.
Have a dedicated GPU with GDDR5 (full Radeon 6670 chip? 7750 derivative?), and disable Trinity's GPU when you switch, so more of the power budget can go to the CPU.
 
Yeah sometimes the stars can align and you can actually nearly double your frame *rate* but it's all smoke and mirrors: you don't actually decrease input latency at all. You might as well just be running it through your TV's motion interpolation ;)

And most of the time it's completely broken and gives you a worse experience than even just the single GPU would have anyways :p

In the past, I've had two HD3870s in Crossfire along with a 2.4GHz Q6600.
The performance of an HD3870 should be comparable to an HD7660G or an HD7650M Turks. The performance of the 2.4GHz Q6600 should be comparable to the 2-module Piledriver in Trinity.


That said, I can personally assure you that what is written in the quoted post isn't true. Gameplay experience changed drastically between single and dual-GPU modes, while using the same settings.
The same can be said about the HD4890 CF system I used afterwards.

In fact, I'm actually a bit amazed at how such misguided statements are being made in this discussion. It proves how some are willing to speak such things with little to no knowledge or personal experience on the subject.
 
In fact, I'm actually a bit amazed at how such misguided statements are being made in this discussion. It proves how some are willing to speak such things with little to no knowledge or personal experience on the subject.
It proves that you're willing to use your interpretation of reality to tell everyone else they're wrong, even in the face of empirical proof to the contrary.

We didn't set out to hunt down multi-GPU micro-stuttering. We just wanted to try some new methods of measuring performance, but those methods helped us identify an interesting problem. I think that means we're on the right track, but the micro-stuttering issue complicates our task quite a bit.

Naturally, we contacted the major graphics chip vendors to see what they had to say about the issue. Somewhat to our surprise, representatives from both AMD and Nvidia quickly and forthrightly acknowledged that multi-GPU micro-stuttering is a real problem, is what we measured in our frame-time analysis, and is difficult to address. Both companies said they've been studying this problem for some time, too.
So, even the manufacturers of your graphics cards acknowledge the issue, and yet it doesn't exist?
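For reference, here is a minimal sketch (made-up numbers, not data from the article) of the kind of frame-time analysis being described, and of why an average fps figure can hide exactly this sort of stutter:

```python
# Hypothetical sketch with made-up numbers: two frame-time traces that both
# average ~60 fps, but the AFR-like one alternates short and long frames.
smooth_ms = [16.7] * 8
afr_ms    = [8.0, 25.4, 8.0, 25.4, 8.0, 25.4, 8.0, 25.4]

def report(name, frame_times):
    avg = sum(frame_times) / len(frame_times)
    worst = max(frame_times)
    # Crude stutter metric: average jump between consecutive frame times.
    jitter = sum(abs(b - a) for a, b in zip(frame_times, frame_times[1:])) / (len(frame_times) - 1)
    print(f"{name}: {1000 / avg:.0f} fps avg, worst frame {worst:.1f} ms, "
          f"frame-to-frame jitter {jitter:.1f} ms")

report("single GPU", smooth_ms)
report("AFR-like  ", afr_ms)
```

Both traces report the same average fps; only the frame-time view exposes the difference.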

By the way, I too owned a 3870CF and 4850CF configuration with an E8400 and then later a Q9450. After the 4850's, I went back to using a single card (5850 and now 7970) to get rid of a large pile of CF-related issues -- microstuttering being only one of them, game compatibility being an even larger one.
 
It proves that you're willing to use your interpretation of reality to tell everyone else they're wrong, even in the face of empirical proof to the contrary.

First, you probably got the meaning of empirical wrong.
Second, this article isn't using v-sync, which I always keep on, so it won't match my results.
Third, even though v-sync is off:
[attached graph from the article]


Oh.. what's that? Multi-GPU solutions are getting the best results?! Can it be that my "interpretation of reality" is actually right?:oops:

Regardless:
If v-sync is off and the multi-GPU setup is giving a low framerate, that's very perceivable.
If v-sync is on and the multi-GPU setup is giving 30+ FPS, it's okay.
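To spell out the mechanism behind that v-sync argument, here is a toy sketch (invented numbers, ignoring buffer-count limits): with v-sync the flip only happens at a refresh boundary, so unevenly spaced render completions can still be displayed at even intervals as long as each frame is ready before its vblank:

```python
# Toy illustration with invented numbers: AFR-style completions alternating
# ~8 ms / ~25 ms apart, flipped only at 60 Hz vblank boundaries.
import math

VBLANK_MS = 1000.0 / 60.0
completions = [8.0, 33.0, 41.0, 66.0, 74.0, 99.0]  # when each frame finishes rendering

flips = [math.ceil(t / VBLANK_MS) * VBLANK_MS for t in completions]  # next vblank
print("render deltas :", [round(b - a, 1) for a, b in zip(completions, completions[1:])])
print("display deltas:", [round(b - a, 1) for a, b in zip(flips, flips[1:])])
```

The render deltas alternate 25/8 ms, but the display deltas all come out at ~16.7 ms. Of course, if a frame misses its vblank the cadence breaks again, which is why this only holds while frames keep finishing within their vblank windows.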

Yeah, multi-gpu is a pain for those who don't know how to handle it. Then again, multi-gpu isn't really for everyone, is it?


So, even the manufacturers of your graphics cards acknowledge the issue, and yet it doesn't exist?

By the way, I too owned a 3870CF and 4850CF configuration with an E8400 and then later a Q9450. After the 4850's, I went back to using a single card (5850 and now 7970) to get rid of a large pile of CF-related issues -- microstuttering being only one of them, game compatibility being an even larger one.

For me, in 99% of the situations micro-stuttering wasn't a problem because I kept the v-sync always on, and I still do.
I don't get why anyone playing a game would ever want to disable v-sync.

Looking at that article, it seems the GPU vendors are proposing a "smart v-sync", which is nothing more than something that v-syncs but at the same time lets reviewers skyrocket their cards to 200 FPS figures they can't see but that look good on paper.
I don't care for that; I just care about playing the game fluidly with nice looks, and I couldn't care less for a framerate that surpasses my monitor's refresh rate.

Although I do agree with you that game compatibility is an issue (hey, I too left CF solutions a couple of years ago), micro-stuttering isn't. At least not for me, in any practical way.
 
First, you probably got the meaning of empirical wrong.
Nope.

Oh.. what's that? Multi-GPU solutions are getting the best results?! Can it be that my "interpretation of reality" is actually right?:oops:
You cherry-picked the ONLY example where mGPU doesn't completely go ape-shit, and then claim it doesn't exist? Really? Did you notice that in every possible example of microstutter, ATI always fared worse than NV? Did you read the follow-up reviews of even the newest 7xxx series having the same problems? This isn't a single article, it's a repeatable pattern across all mGPU configurations.

Hey, did you notice that your railing at me months ago about how Trinity 17W was absolutely, positively four cores didn't pan out?

Is being wrong a third time going to finally be the charm that makes you go do your own research? Or are you just going to continue spewing idiocy?
 
You cherry-picked the ONLY example where mGPU doesn't completely go ape-shit, and then claim it doesn't exist?

The "ONLY example"?
Like, of the whole TWO (2) games that were tested in that article (Bulletstorm and SC2), one was favourable to multi-GPUs and the other one wasn't.

So if I had shown the ONLY two examples, that article wouldn't even have any meaning to it, would it?

Nonetheless, it's funny how you keep running away from the v-sync argument.
I still don't care for that article; I used v-sync and I still do. It doesn't matter to me, and it shouldn't matter to anyone who is playing instead of benching.

It seems you probably threw out your perfectly fine CF setup and downgraded because you didn't know that v-sync would solve most of your problems.
Sorry to hear that. Maybe you should've done "your own research" back then?



Hey, did you notice that your railing at me months ago about how Trinity 17W was absolutely, positively four cores didn't pan out?

I'm glad to know that you're so dedicated to pursuing everything I said. The initial line-up doesn't have an A10/A8 17W part, but there will be one, as has been stated before.
When that happens, I may or may not remember this discussion and mock you with it. Probably not, though.
 
I'm constantly amazed at how tech minded people can miss the obvious. If AMD is slacking on crossfire drivers for certain games (believe me they are), what else are they slacking on?
Maybe what is "obvious" to you isn't actually correct? Some of us actually know how these businesses work internally...

That said, I can personally assure you that what is written in the quoted post isn't true. Gameplay experience changed drastically between single and dual-GPU modes, while using the same settings.
I did not say anything about "gameplay experience". I said that the *latency* does not improve with SLI/CF, i.e. the amount of time it takes from when you hit a key to when you see an update on the screen is not improved. So the look of the motion on the screen can appear "smoother", but you will not actually decrease response time. This doesn't have anything to do with drivers or anything else... it's just a basic fact of AFR.
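To put rough numbers on that (a simplified sketch with made-up figures, assuming ideal frame pacing and ignoring the extra queueing AFR typically adds): two GPUs in AFR can roughly double the delivered frame rate while each individual frame still takes one full render time from input sample to display:

```python
# Simplified sketch, made-up numbers: AFR roughly doubles frame rate, but the
# input-to-display time of each frame is still one full render time.
RENDER_MS = 30.0
FRAMES = 8

def simulate(num_gpus):
    # Each frame samples input when its GPU starts it; frames are spread
    # evenly across GPUs (ideal pacing, no queueing modelled).
    return [(i * RENDER_MS / num_gpus, i * RENDER_MS / num_gpus + RENDER_MS)
            for i in range(FRAMES)]

for gpus in (1, 2):
    ev = simulate(gpus)
    deltas = [b[1] - a[1] for a, b in zip(ev, ev[1:])]
    fps = 1000.0 / (sum(deltas) / len(deltas))
    latency = sum(display - sampled for sampled, display in ev) / len(ev)
    print(f"{gpus} GPU(s): ~{fps:.0f} fps, input-to-display ~{latency:.0f} ms")
```

With these numbers the second GPU takes you from ~33 to ~67 fps, yet the input-to-display time stays at ~30 ms in both cases.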

amazed at how such misguided statements are being made in this discussion. It proves how some are willing to speak such things with little to no knowledge or personal experience on the subject.
... really? Sure, if you want to think that I have no personal experience on the subject, fine :D

And you know what, I have nothing against people who like multi-GPU. My objection was the claim that multi-GPU drivers/experience has *any* bearing on anything else to do with the single-GPU experience. It does not.
 
Micro-stuttering is still an issue of course, especially on AMD GPUs. Look at this graph: http://www.computerbase.de/artikel/...dia-geforce-gtx-690/8/#abschnitt_mikroruckler

It's just one example. Every reviewer who is familiar with this kind of test will confirm it for you.

:rolleyes:
I wonder if everyone is simply skipping the part where I say "most of the time, it's not a problem when v-sync is on".

If you please, either show us v-synced results that prove me wrong or just don't feed this subject.


Really.. I keep saying that if I turn the 4x4 on in my jeep I'll have no problems going through a muddy road, but you and Albuquerque keep saying I'll slide if I only have traction on two wheels.
Yes, I know. I'm not pretending I won't slide if I only use two wheels. I'm not pretending micro-stutter isn't a problem, whether v-sync is on or not.
That's why I turn the four-wheel mode on. That's why I turn v-sync on.


After this, I can't see how any other comment claiming I'm simply "pretending it doesn't exist" isn't just a pure and simple act of trolling.
 
I did not say anything about "gameplay experience". I said that the *latency* does not improve with SLI/CF, i.e. the amount of time it takes from when you hit a key to when you see an update on the screen is not improved. So the look of the motion on the screen can appear "smoother", but you will not actually decrease response time. This doesn't have anything to do with drivers or anything else... it's just a basic fact of AFR.

Why would it? Does that "latency" change between using slower and faster GPUs?
If not, what's the point in your claim?


... really? Sure, if you want to think that I have no personal experience on the subject, fine :D

And you know what, I have nothing against people who like multi-GPU. My objection was the claim that multi-GPU drivers/experience has *any* bearing on anything else to do with the single-GPU experience. It does not.

Apologies, I misinterpreted your claims based on the previous and following posts regarding multi-GPU rendering.
 
I'm curious about the VLIW4 aspect. Do you guys think they went this route because VLIW4 is space/transistor-efficient, or because it takes longer to design a CPU+GPU and GCN wasn't far enough along? I'm guessing it takes longer to design a CPU+GPU.

Also, how does TSMC 28nm compare to GF 32nm? I suppose that's a horribly complicated question.
 
Why would it? Does that "latency" change between using slower and faster GPUs?
Yes, it does.
Without vsync: always (but you get tearing in exchange).
With vsync and double buffering: it decreases only if the fps are below the refresh rate and the faster GPU lets you jump to another fraction of the refresh rate (otherwise you actually have a slightly lower latency with the slower GPU :oops:).
With triple buffering: the faster GPU delivers a lower latency below the refresh rate - always (and also above the refresh rate) without vsync, and at least on average (not for each single frame) with vsync.
A faster GPU could also reduce the latency when one is stuck at the refresh rate with vsync, but I think this possibility is not used, as usually no rendered frame is discarded - though I could be wrong.
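For the double-buffered vsync case, a simplified sketch (assumed 60 Hz refresh, treating the quantised frame interval as the display latency) shows why nothing changes until the render time crosses a multiple of the refresh period:

```python
# Simplified sketch, assumed numbers: vsync + double buffering quantises the
# frame interval to whole refresh periods, so a faster GPU only helps once the
# render time drops past a refresh-period boundary.
import math

VBLANK_MS = 1000.0 / 60.0  # ~16.7 ms at 60 Hz

def vsync_double_buffered(render_ms):
    intervals = max(1, math.ceil(render_ms / VBLANK_MS))
    frame_interval = intervals * VBLANK_MS
    return 1000.0 / frame_interval, frame_interval  # (fps, approx. latency)

for render_ms in (25.0, 20.0, 17.0, 16.0):
    fps, latency = vsync_double_buffered(render_ms)
    print(f"render {render_ms:4.1f} ms -> {fps:4.1f} fps, ~{latency:.1f} ms")
```

Here 25, 20 and 17 ms render times all land on 30 fps with ~33 ms latency; only once the GPU gets under ~16.7 ms does it jump to 60 fps and ~16.7 ms.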
 