AMD: R9xx Speculation

Discussion in 'Architecture and Products' started by Lukfi, Oct 5, 2009.

  1. hatter

    Newcomer

    Joined:
    Dec 26, 2009
    Messages:
    32
    Likes Received:
    0
But I would have liked to see an option to disable PowerTune. It has its benefits, but there should be a way to bypass it in case users want to push the card to the max.
     
  2. cho

    cho
    Regular

    Joined:
    Feb 9, 2002
    Messages:
    416
    Likes Received:
    2
    RV1070 = 300mm^2
    RV1080 = 400mm^2

    :twisted:
     
  3. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    While in some cases it's not a big improvement over Cypress, that's expected given it's on the same process node and that some changes won't benefit all games. This launch was damaged more by excessive pre-release hype than anything particularly wrong with the cards.
     
  4. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
While I agree that, if forced to distill PowerTune into half a sentence, I'd probably choose similar words, it seems much more elaborate than Nvidia's solution. Plus, the user-selectable Overdrive function to partially ignore the set limits makes it much more useful. I only wish I could go to, say, +30 or +40% instead of just +20. In one particular test, GPU-Z showed me an average clock of 871 MHz over the duration despite the PowerTune slider being at +20%. :)

    We have those alright, don't we? :)
     
  5. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    Major architectural changes come to market several years after the designers made their prediction of which changes would pay off in the final design.
    If the chip design pipeline waited until a feature was proven to pay off, the chip wouldn't come out for another two years or so.
    The choice is to pick possible winners as best you can, way before you know the answer, or to guarantee you are late to the party.
     
  6. NathansFortune

    Regular

    Joined:
    Mar 3, 2009
    Messages:
    559
    Likes Received:
    0
    I agree that it is more complex, but if you break it down to its bare elements then it is just an anti-Furmark switch.

    Other than a couple of comments here and there I haven't seen any articles along those lines, and definitely not from Charlie; he seems to think the 6970 and PowerTune are the second coming...
     
  7. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    Is it more complex? According to Anand, Nvidia has some hardware monitoring in place, whereas AMD employs a usage-based formula to derive an estimate of power consumption. I've never heard of any hardware monitoring on GF110; I thought it was simply application-profile based.
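    For what it's worth, that "usage-based formula" approach can be sketched in a few lines. This is a toy model, not AMD's actual algorithm: the unit weights, idle power, and TDP limit below are all made-up numbers, and the real hardware reacts on far finer time scales.

    ```python
    # Toy sketch of a usage-based power estimator with clock throttling.
    # All coefficients below are hypothetical, not AMD's real values.

    TDP_LIMIT_W = 220.0
    BASE_CLOCK_MHZ = 880.0
    IDLE_POWER_W = 40.0

    # Hypothetical per-unit weights: watts at 100% activity and base clock.
    WEIGHTS = {"simd": 120.0, "memio": 50.0, "setup": 30.0}

    def estimate_power(activity, clock_mhz):
        """Estimate power from per-unit activity counters (0.0 to 1.0).

        Dynamic power is assumed to scale linearly with clock at fixed voltage.
        """
        dynamic = sum(WEIGHTS[u] * activity.get(u, 0.0) for u in WEIGHTS)
        return IDLE_POWER_W + dynamic * (clock_mhz / BASE_CLOCK_MHZ)

    def throttle_clock(activity, clock_mhz=BASE_CLOCK_MHZ, step=10.0):
        """Lower the clock until the estimated power fits under the TDP cap."""
        while estimate_power(activity, clock_mhz) > TDP_LIMIT_W and clock_mhz > step:
            clock_mhz -= step
        return clock_mhz

    # A FurMark-like load pegs every unit; typical games don't.
    furmark = {"simd": 1.0, "memio": 1.0, "setup": 1.0}
    game = {"simd": 0.7, "memio": 0.5, "setup": 0.3}

    print(throttle_clock(furmark))  # 790.0 - throttled below base clock
    print(throttle_clock(game))     # 880.0 - untouched
    ```

    One advantage of estimating rather than measuring, as the reviews noted, is determinism: the same workload gets the same clocks on every chip, instead of depending on that sample's leakage and temperature.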
     
  8. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,322
    Likes Received:
    1,120
    Nvidia's engineering is looking smarter all the time lately... the whole architecture introduced with the 8800 is looking better and better. It now appears AMD is having to "catch up" to them in many areas, which is bloating AMD's die size without adding speed.

    For AMD's part, imo they need to drop the whole small-die pretense already. They might look a lot better if they allowed themselves up to 500 mm^2 in the first place. The only potential issue I could see is that such large dies could hamper the use of X2 cards due to power limits, as Nvidia has faced... but then again they might not. I don't get the sense the 5970 is killing it in sales anyway.

    However, if 28nm is truly delayed a lot, I see it being a much bigger problem for Nvidia... they are already at max die size more or less, so they literally have nowhere to go. AMD, OTOH, theoretically has ~140 mm^2 left to play with, so they could introduce a next-gen chip on 40nm while Nvidia could not. A smart AMD would use that to great advantage, but I doubt they will, as they seem cautious and not playing to win again.

    I worry about AMD if they are beginning to lose the plot in GPUs. They've been behind in CPUs for a long time, of course, but at least the GPU division seemed on the ball. Now cracks are appearing. I don't see AMD doing anything at all in the super-important mobile phone space either, while both Intel and Nvidia do.
     
  9. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    It seems to work sometimes, but not always. That's why I'm hopeful that it's just drivers.

    Many setup-bound games (you're not CPU bound, yet tripling the resolution only increases frame time by 50%) aren't showing improvement over the 6870. Cayman should show a larger improvement over its predecessors at low resolution, assuming memory and CPU aren't bottlenecks at either resolution.
     
  10. UniversalTruth

    Veteran

    Joined:
    Sep 5, 2010
    Messages:
    1,747
    Likes Received:
    22

    Why, to optimise?!? I don't see any problem here.

    http://www.techpowerup.com/reviews/HIS/Radeon_HD_6970/14.html
     
  11. gongo

    Regular

    Joined:
    Jan 26, 2008
    Messages:
    582
    Likes Received:
    12
    My $370 purchase seems to have died a little after reading those reviews... very, very confusing launch results... perhaps the delays were due to the drivers, but definitely not the CHIL VRM (all HD69xx cards use Volterra)... lolz... now I know which sites to take with a pinch of salt, and who was hyping Unigine and GTX580-killing results... I think AMD kinda shot themselves with so much secrecy leading up to the 15th...
     
  12. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    There really aren't that many apps that are setup bound. You can see from any straight geometry test that the dual geometry engines, in terms of setup, are working fine. There are more improvements to come from the drivers with regards to the tessellation changes, though.
     
  13. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,322
    Likes Received:
    1,120
    At some other sites Dirt 2 was doing a lot worse.

    It seems like one of those outlying games like Hawx/Hawx 2 and Lost Planet 2 where AMD just gets crushed.

    F1 I never mentioned as an outlier.
     
  14. flopper

    Newcomer

    Joined:
    Nov 10, 2006
    Messages:
    150
    Likes Received:
    6
    And besides, looking at 5760×1080, 6950 x2 beats GTX 580 SLI.
    http://www.hardwareheaven.com/revie...s-card-review-crossfire-eyefinity-vs-sli.html

    Kinda a good deal: half the price for better performance...
     
  15. UniversalTruth

    Veteran

    Joined:
    Sep 5, 2010
    Messages:
    1,747
    Likes Received:
    22
    Future proof? Anyone? It seems they are designed with future heavy-tessellation applications in mind.

    Here is an example: look at the jump from Radeon HD 5870 to Radeon HD 6970.
     
  16. air_ii

    Newcomer

    Joined:
    May 2, 2007
    Messages:
    134
    Likes Received:
    0
    TPU benched Dirt2 in DX9 mode...
     
  17. gongo

    Regular

    Joined:
    Jan 26, 2008
    Messages:
    582
    Likes Received:
    12
    Check out Amazon prices!

    6970 goes for $480
    6950 goes for $370

    ..... Now, was there ever a last-minute price adjustment from AMD after Nvidia surprised them with the 580 and 570 double whammy? If one were to speculate about AMD's plans for "profits" (per Dave's comments earlier in this thread), that did not turn out too well... kinda makes some sense with the renaming bits... I wonder, if they had stuck with 67xx and 68xx from the start, would it have made any difference now?
     
  18. A1xLLcqAgt0qc2RyMz0y

    Regular

    Joined:
    Feb 6, 2010
    Messages:
    985
    Likes Received:
    277
    That is not going to happen, as AMD just released their next-generation (32nm) part, the 6970, on 40nm.

    I am also under the impression that it takes about two years to go from the design of a new GPU to production, so unless AMD started another next-generation 40nm design a year and a half ago (and they didn't), there will be no 6970+ coming in 2011.
     
  19. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,297
    Likes Received:
    247
    I don't think so. Looking at ComputerBase, HD6970 is faster than HD5870 by:
    16% at 1920×1200 / AA 4x
    31% at 2560×1600 / AA 4x

    die-size increased from 334mm² to 389mm² = 16%

    And this is a comparison of drivers polished for a year against a fresh and evidently buggy driver. I think this launch is very similar to R520's launch...
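    The area-versus-performance comparison above checks out; a couple of lines make the perf-per-mm² point explicit (all numbers taken from the post, ComputerBase percentages as quoted):

    ```python
    # Die sizes and HD6970-over-HD5870 gains as quoted above (ComputerBase).
    cypress_mm2, cayman_mm2 = 334.0, 389.0
    gains_pct = {"1920x1200 4xAA": 16.0, "2560x1600 4xAA": 31.0}

    area_ratio = cayman_mm2 / cypress_mm2
    print(round((area_ratio - 1.0) * 100.0))  # 16 -> ~16% more die area

    # Performance per mm^2 relative to Cypress (1.0 = unchanged).
    for setting, gain in gains_pct.items():
        perf_per_area = (1.0 + gain / 100.0) / area_ratio
        print(setting, round(perf_per_area, 2))  # 1.0 at 1920, 1.12 at 2560
    ```

    So at 2560×1600 Cayman already delivers more performance per unit of area than Cypress even on launch drivers, which supports the driver-maturity argument.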
     
  20. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,012
    Likes Received:
    112
    Hmm, this isn't very convincing. There's some good stuff, but overall it doesn't look like a very efficient chip compared to the in-house competition (it still does OK against the external competition).
    Most sites see about a 10% difference between the HD 6870 and HD 6950 overall (with the HD 5870 being about as fast as the HD 6950), and another 10% between the HD 6950 and HD 6970 - so about 20% between full Barts and full Cayman. Cayman cards are definitely priced to reflect performance, though.
    That's despite Cayman having a 50% larger die, 31% more memory bandwidth, way more SIMDs (and also a higher peak ALU rate), and a power draw which is definitely larger than the increased performance would indicate (probably directly related to the increased die size / transistor count). Granted, it has definitely improved geometry setup / tessellation (with the latter still giving erroneous results in some tests where Barts is actually faster, probably due to drivers).
    Also, the 10% difference between the HD 6950 and HD 6970 is very small, corresponding exactly to the clock increase (core and mem). If Cypress had very bad SIMD scaling, Cayman seems to have non-existent SIMD scaling - has anyone benched the cards at the same clock? I'll stick to the theory that once you go past 8 or so SIMDs per graphics engine (or rasterizer, in the case of Evergreen), things don't really improve much. Maybe the speculation about not-quite-sufficient internal bandwidth is true too; it would certainly only get worse as you add more SIMDs (I haven't seen anything indicating internal bandwidth has improved for Cayman). So maybe the VLIW4 SIMDs are more efficient than VLIW5, but since the SIMDs hardly scale at all, it's a wasted effort for this chip to have more (but smaller) SIMDs.
    Compared to Cypress it isn't that bad, but die area and transistors still increased more than performance. Granted, the two graphics engines are definitely warranted for the increased tessellation performance (and it pays off in some titles using tessellation), but overall I just don't think it's very efficient.

    There's also some good stuff: PowerTune imho has tremendous potential in the mobile space, but for desktops it's not nearly as important.
    Cayman was initially planned for 32nm, right? If so, I can only wonder what (if anything) was sacrificed for 40nm - on 32nm I think there would have been room for more even without exceeding 300mm² (why not 4 graphics engines with 8 SIMDs each and doubled internal bandwidth :) ).
     
    #6640 mczak, Dec 15, 2010
    Last edited by a moderator: Dec 15, 2010
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.