AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

Oh no, this is nothing "new" to GCN. Microstutter has been around for a long time; arguably it has existed since AFR solutions were first brought forth, no matter which vendor you are speaking about.
But GCN has it even in single-card configurations; it is not just a multi-GPU issue!
 
First off, they are different things: microstutter is not frame latency, and no, there is nothing architectural about either.
 
Titan's also undoubtedly cheaper to build.
Really? A 551 mm² die that is a low-volume product vs. two 380 mm² dies that have been in mass production for over a year and a half...

It is a bit murkier than you think.


One has to wonder why it took AMD so many years to finally start working on the problem. Nvidia started in 2006 with G80 I believe.
It was towards the end of Tesla's life that Nvidia noticed the problem.
 
Do we even know the reason for these insane micro-stuttering graphs? (I've never seen anything like them!) Why does AMD suffer this with the GCN architecture in particular?

Read these articles to learn more about it if you're interested:
http://www.anandtech.com/show/6857/amd-stuttering-issues-driver-roadmap-fraps
http://www.anandtech.com/show/6862/fcat-the-evolution-of-frame-interval-benchmarking-part-1

In short, AMD pushes frames through as fast as it can. The end result is a rendering sequence that queues frames for the GPUs: the game can submit a new frame every 10 ms, while a GPU takes 40 ms to render one. If you have 2 GPUs and use AMD's approach, you get frame times alternating between 30 and 10 ms. NVIDIA solves this by delaying frames a little to even out the frame times. The end result is a little extra input lag, but a more even frame distribution. AMD has said that it will provide a driver that gives you the option of lower input lag or more even frame distribution.
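For what it's worth, here is a minimal sketch of that 10 ms / 40 ms example in Python. The numbers and the simple pacing rule are just the hypothetical ones from the explanation above, not anything measured from either vendor's driver:

```python
# Toy model of the example above: 2 GPUs in AFR, the game can submit a new
# frame every 10 ms, each GPU needs 40 ms to render one. Hypothetical numbers.
SUBMIT_INTERVAL = 10.0   # ms between frame submissions by the game
RENDER_TIME = 40.0       # ms for one GPU to render a frame
NUM_GPUS = 2
FRAMES = 8

def present_times(pace):
    """Return the time (ms) at which each frame is presented."""
    gpu_free = [0.0] * NUM_GPUS      # when each GPU can start its next frame
    last_present = 0.0
    times = []
    for i in range(FRAMES):
        submit = i * SUBMIT_INTERVAL
        gpu = i % NUM_GPUS           # alternate-frame rendering
        rendered = max(submit, gpu_free[gpu]) + RENDER_TIME
        gpu_free[gpu] = rendered
        present = rendered
        if pace:
            # "Frame pacing": hold the finished frame until presents are
            # spaced evenly, trading a little latency for a regular cadence.
            present = max(rendered, last_present + RENDER_TIME / NUM_GPUS)
        times.append(present)
        last_present = present
    return times

def intervals(ts):
    return [round(b - a, 1) for a, b in zip(ts, ts[1:])]

print("unpaced:", intervals(present_times(pace=False)))  # 10, 30, 10, 30, ...
print("paced:  ", intervals(present_times(pace=True)))   # 20, 20, 20, 20, ...
```

Both runs deliver the same average framerate, but the unpaced one alternates 10 ms and 30 ms gaps, while the paced one presents some frames a little later than they were rendered, which is where the extra input lag comes from.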
One has to wonder why it took AMD so many years to finally start working on the problem. Nvidia started in 2006 with G80 I believe.
Simply a difference in philosophy, I think. AMD focused on latency while NVIDIA focused on frame pacing.

@LordEC911 - Tesla has been dead for quite a while :p. (He died in 1943). Also, Tahiti is around 352 mm², not 380, but that's nitpicking.
 
Do we even know the reason for these insane micro-stuttering graphs? (I've never seen anything like them!) Why does AMD suffer this with the GCN architecture in particular?


In reality, the term "suffer" is a bit strong... Nvidia uses a technique that smooths the graph (frame metering, essentially a one-frame delay). In some ways it is better: it produces smooth graphs, and in some cases (basically with AFR) it genuinely smooths the output too. But sadly it can also be the worst thing with AFR when the metered frame delay gets too long: there you see a real "pause", input lag, the feeling that your mouse has stopped responding for a moment. It is not just a question of perceived smoothness; it is more like your system has taken a little break. What I find interesting is that Nvidia never tried to make any marketing out of this. If they have had it available since Kepler, you would have expected them to promote it, but nothing; we had to wait for review sites before people started to notice it.

In a way, if you tell someone "look for the dog", he will look for the dog, when normally he would just walk down the street.

In general people use v-sync and triple buffering, especially with CFX or SLI, because you don't want to see tearing, and with a card that can deliver 120+ fps at 1080p in most recent games, you will see plenty of tearing. The benchmark experience is not the same as the user experience. Personally I max out the details, I may even inject SweetFX, and when the minimum framerate allows it I use supersampling, or at least edge-detect AA (16x/24x/32x), because I would rather use all the resources of my system just to hold a minimum of 60 fps (v-sync on my good old panel). Without those settings, most games run a lot, a lot higher.
 
Really? A 551 mm² die that is a low-volume product vs. two 380 mm² dies that have been in mass production for over a year and a half...

It is a bit murkier than you think.



It was towards the end of Tesla's life that Nvidia noticed the problem.

Titan is almost certainly more expensive to build than a 7990, especially when you take R&D into consideration.

If Titan were as cheap to build as a couple of 7970 chips, I imagine we would have seen it mass produced.
 
I only meant that the Titan board is probably cheaper to build than the big 7990. I'm ignoring the costs associated with GK110 R&D and manufacturing challenges.

Actually, I've been wondering if GK110 die fabrication is suffering in similar ways to GF100... hence the disabled units on the $1000 Titan.
 
In reality, the term "suffer" is a bit strong... Nvidia uses a technique that smooths the graph (frame metering, essentially a one-frame delay). In some ways it is better: it produces smooth graphs, and in some cases (basically with AFR) it genuinely smooths the output too. But sadly it can also be the worst thing with AFR when the metered frame delay gets too long: there you see a real "pause", input lag, the feeling that your mouse has stopped responding for a moment. It is not just a question of perceived smoothness...
As discussed earlier: in a steady-state situation, it's sufficient to insert a small delay just once to get rid of the frame-time imbalance. If this is what Nvidia is doing, there doesn't have to be a fixed additional delay. So let's not blindly assume the increased lag is real; it may very well not be.
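A rough sketch of that argument (the numbers are made up; the point is only that a one-time phase shift is enough in steady state):

```python
# Steady state: each GPU delivers a frame every 40 ms, but the two streams are
# only 10 ms apart, so the merged output alternates 10 ms / 30 ms gaps.
PERIOD = 40.0        # ms per frame per GPU
PHASE = 10.0         # offset of GPU B's stream relative to GPU A's

gpu_a = [i * PERIOD for i in range(6)]            # 0, 40, 80, ...
gpu_b = [i * PERIOD + PHASE for i in range(6)]    # 10, 50, 90, ...

def merged_intervals(shift_b=0.0):
    """Interleave both present streams and return the frame-to-frame gaps."""
    presents = sorted(gpu_a + [t + shift_b for t in gpu_b])
    return [round(b - a, 1) for a, b in zip(presents, presents[1:])]

print("uncorrected: ", merged_intervals())                    # 10, 30, 10, 30, ...
# Shift GPU B's stream once so it lands halfway between GPU A's frames;
# after that, no per-frame delay is needed to keep the cadence even.
print("shifted once:", merged_intervals(PERIOD / 2 - PHASE))  # 20, 20, 20, ...
```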
 
Looking at comparison graphs, the problem is orders of magnitude worse on GCN than on the equivalent NV setup though. Even with the new "experimental" driver it's far, far worse.

This is 100% due to Nvidia's frame metering tech. You'd have to compare older hardware from Nvidia to see if it was quite as bad with regard to micro-stuttering as AMD's. AMD is working on its own frame metering technology. I believe the 690 has it implemented in hardware (?), so AMD's driver solution may not be as effective as that.
 
Really? A 551 mm² die that is a low-volume product vs. two 380 mm² dies that have been in mass production for over a year and a half...

It is a bit murkier than you think.

Titan isn't a fully enabled chip though, and while it isn't hardware, that gaming bundle has to cost quite a bit to include. Also, if it's true that the GTX 780 is GK110-based, the low-volume argument becomes questionable too.
 
Titan isn't a fully enabled chip though, and while it isn't hardware, that gaming bundle has to cost quite a bit to include. Also, if it's true that the GTX 780 is GK110-based, the low-volume argument becomes questionable too.

I highly doubt that the GTX 780 is a Titan LE, because that would force them to price it under $600.


And yes, when I said Tesla I was talking about the architecture, around the end of the GTX 280/285's life.
 
I highly doubt that the GTX 780 is a Titan LE, because that would force them to price it under $600.

Well, IMO it can't be GK104-based, and it doesn't seem like they are bringing in a new chip, so at the moment I think GK110-based is most likely to be true. $599 would work.

Oops, just noticed this is the Southern Islands thread. How did this happen :)
 
Yes, but why price something at $600 when you can price it at $800 and have something else at $549-$599?

How much can they OC the 680 for a new product? It's hard to justify a $549-599 product based on the 680 when the 7970 GE is under $449.
 