AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

Discussion in 'Architecture and Products' started by UniversalTruth, Dec 17, 2010.

  1. DavidGraham

    DavidGraham Veteran

But GCN has them even in single-card configurations; it is not just a multi-GPU issue!
     
  2. Dave Baumann

    Dave Baumann Gamerscore Wh... Moderator Legend

First off, they are different things: microstutter is not frame latency. And no, there is nothing architectural about either.
     
  3. Albuquerque

    Albuquerque Red-headed step child Veteran

    Again, and just as Dave pointed out, that's not "new" to GCN. This same ordeal exists on VLIW4 and VLIW5 parts.
     
  4. boxleitnerb

    boxleitnerb Regular

One has to wonder why it took AMD so many years to finally start working on the problem. Nvidia started in 2006 with G80, I believe.
     
  5. LordEC911

    LordEC911 Regular

Really? A 551 mm² die that is a low-volume product vs. two 380 mm² dies that have been in mass production for over a year and a half...

    It is a bit murkier than you think.


    It was towards the end of Tesla's life that Nvidia noticed the problem.
     
  6. Helmore

    Helmore Regular

    Read these articles to get to know more about it if you're interested:
    http://www.anandtech.com/show/6857/amd-stuttering-issues-driver-roadmap-fraps
    http://www.anandtech.com/show/6862/fcat-the-evolution-of-frame-interval-benchmarking-part-1

In short, AMD pushes frames through as fast as it can. The end result is a rendering sequence that queues a new frame for the GPUs every 10 ms, while each GPU takes 40 ms to render one. With 2 GPUs and AMD's approach, you get alternating times between frames of 30 and 10 ms. NVIDIA solves this by delaying frames a little to try to get even frame times. The end result is a little extra input lag, but a more even frame distribution. AMD has said it will provide a driver that gives you the option of lower input lag or more even frame distribution.
Simply a difference in philosophy, I think: AMD focused on latency while NVIDIA focused on frame pacing.
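The numbers in that post can be turned into a toy AFR simulation (a minimal sketch: the 10 ms submit interval, 40 ms render time, and two-GPU alternate-frame setup come from the post above; the function and its pacing logic are illustrative, not either vendor's actual driver code):

```python
def afr_frame_intervals(n_frames, n_gpus=2, submit_ms=10, render_ms=40, pace=False):
    """Simulate AFR presentation intervals, with optional frame metering."""
    gpu_free = [0.0] * n_gpus   # time at which each GPU becomes idle
    present = []                # frame completion (presentation) times
    for i in range(n_frames):
        # frames alternate between GPUs; a frame starts when it is
        # submitted and its GPU is free, whichever is later
        start = max(i * submit_ms, gpu_free[i % n_gpus])
        gpu_free[i % n_gpus] = start + render_ms
        present.append(start + render_ms)
    if pace:
        # frame metering: hold each frame back until an even interval has passed
        target = render_ms / n_gpus   # 20 ms steady-state interval
        for i in range(1, n_frames):
            present[i] = max(present[i], present[i - 1] + target)
    return [b - a for a, b in zip(present, present[1:])]

print(afr_frame_intervals(6))             # → [10, 30, 10, 30, 10]
print(afr_frame_intervals(6, pace=True))  # → [20.0, 20.0, 20.0, 20.0, 20.0]
```

The unpaced run reproduces the 10/30 ms alternation described above; metering trades a small amount of added latency for even 20 ms gaps.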

    @LordEC911 - Tesla has been dead for quite a while :p. (He died in 1943). Also, Tahiti is around 352 mm², not 380, but that's nitpicking.
     
  7. kalelovil

    kalelovil Regular

The previous generation's multi-GPU solutions weren't any better; see how poorly the 590, and particularly the 6990, do in this TechReport review: http://techreport.com/review/22890/nvidia-geforce-gtx-690-graphics-card/7
     
  8. UniversalTruth

    UniversalTruth Veteran

I think one day they will learn the hard way that they are playing with fire, which could end in bankruptcy ;)

Hehe, this is funny. He probably meant the GeForce Tesla architecture, aka GT200, the GTX 280 era...
     
  9. lanek

    lanek Veteran


In reality, the term "suffer" is a bit harsh... Nvidia uses a technique that smooths the graph (metering, i.e. a one-frame delay). In some ways it is better: it is excellent at smoothing the graph, and in some cases (basically AFR) it smooths the actual output too. But it can also be the worst thing with AFR, when the metered delay on a frame gets too long: you see a real "pause", input lag, the feeling that your mouse has stopped responding for a moment. It's not just a question of perceived smoothness; it's more like your system takes a little break. What I find interesting is that Nvidia never tried to market this. If they have had it since Kepler, you would expect them to promote it, but nothing; it took some review sites digging into it before people started to notice.

In a way, if you tell someone "look for the dog", he will look for the dog, when normally he would just walk down the street.

In general, people use v-sync and triple buffering, especially with CrossFire or SLI, because you don't want to see tearing, and with a card that can push 120+ fps at 1080p in most recent games, you will see a lot of tearing. The benchmark experience is not the same as the user experience. Personally, I turn on the maximum details available, sometimes even inject SweetFX, and where the minimum frame rate allows it I use supersampling, or at least edge-detect AA (16x/24x/32x), because I would rather spend all of my system's resources on holding a minimum of 60 fps (v-sync on my good old panel). Without those settings, most games run a lot, a lot higher.
     
    Last edited by a moderator: Apr 25, 2013
  10. sheepdogexpress

    sheepdogexpress Newcomer

Titan is almost certainly more expensive to build than a 7990, especially when you take R&D into consideration.

If Titan were as cheap to build as a couple of 7970 chips, I imagine we would have seen it mass-produced.
     
  11. swaaye

    swaaye Entirely Suboptimal Legend

    I only meant that the Titan board is probably cheaper to build than the big 7990. I'm ignoring the costs associated with GK110 R&D and manufacturing challenges.

Actually, I've been wondering if GK110 die fabrication is suffering in similar ways to GF100... hence the disabled units on the $1000 Titan.
     
  12. silent_guy

    silent_guy Veteran Subscriber

As discussed earlier: in a steady-state situation, it's sufficient to insert a small delay just once to get rid of the frame-time imbalance. If this is what Nvidia is doing, there doesn't have to be a fixed additional delay. So let's not blindly assume the increased lag is real; it's quite possible it isn't.
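That point can be checked with a toy model (a sketch only; the 40 ms render time is borrowed from earlier in the thread, and the two-GPU back-to-back rendering model is an assumption, not Nvidia's actual implementation). A single one-time offset to the second GPU's very first frame evens out every subsequent interval, with no recurring delay:

```python
def afr_intervals(second_gpu_offset_ms, render_ms=40, n_frames=8):
    """Two GPUs render alternate frames back to back; only the second
    GPU's very first frame is delayed by second_gpu_offset_ms."""
    next_start = [0.0, float(second_gpu_offset_ms)]
    present = []
    for i in range(n_frames):
        gpu = i % 2
        finish = next_start[gpu] + render_ms
        next_start[gpu] = finish   # each GPU starts its next frame immediately
        present.append(finish)
    return [b - a for a, b in zip(present, present[1:])]

print(afr_intervals(10))  # naive 10 ms stagger → [10.0, 30.0, 10.0, 30.0, ...]
print(afr_intervals(20))  # one-time 20 ms delay → [20.0, 20.0, 20.0, ...]
```

After the single 20 ms offset, both GPUs keep rendering flat out and every interval is even: no per-frame delay, hence no ongoing lag penalty in the steady state.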
     
  13. RedVi

    RedVi Regular

This is 100% due to Nvidia's frame-metering tech. You'd have to compare older Nvidia hardware to see if it is quite as bad as AMD's with regard to micro-stuttering. AMD is working on its own frame-metering technology. I believe the 690 has it implemented in hardware (?), so AMD's driver solution may not be as effective as that.
     
  14. DuckThor Evil

    DuckThor Evil Legend

Titan isn't a fully enabled chip, though, and while it isn't hardware, that gaming bundle has to cost quite a bit to include. Also, if it's true that the GTX 780 is GK110-based, the low-volume claim becomes questionable too.
     
  15. LordEC911

    LordEC911 Regular

    I highly doubt that GTX780 is Titan LE because that would force them to price it under $600.


And yes, by Tesla I meant the architecture, around the end of the GTX 280/285's life.
     
  16. DuckThor Evil

    DuckThor Evil Legend

Well, IMO it can't be GK104-based, and it doesn't seem like they are bringing in a new chip, so at the moment GK110-based seems most likely to be true. $599 would work.

    oops, just noticed this is the Southern Island thread. How did this happen :)
     
    Last edited by a moderator: Apr 25, 2013
17. The GTX 480 (partially enabled) and GTX 580 (fully enabled) were both priced under $600, so there should be no problem pricing the GTX 780 (partially enabled GK110) under $600 as well.
     
  18. LordEC911

    LordEC911 Regular

Yes, but why price something at $600 when you can price it at $800 and have something else at $549-599?
     
  19. Kaotik

    Kaotik Drunk Member Legend

How much can they OC the 680 for a new product? It's hard to justify a $549-599 product based on the 680 when the 7970 GE is under $449.
     
  20. jimbo75

    jimbo75 Veteran

It's hard to justify $459 680s, but that's what Nvidia gets away with pricing them at anyway.
     